Instance discriminative self-supervised representation learning has been attracting attention thanks to its unsupervised nature and informative feature representations for downstream tasks. In practice, it commonly uses a larger number of negative samples than the number of supervised classes. However, there is an inconsistency in existing analyses: theoretically, a large number of negative samples degrade classification performance on a downstream supervised task, while empirically they improve it. We provide a novel framework to analyze this empirical result regarding negative samples, using the coupon collector's problem. Our bound can implicitly incorporate the supervised loss of the downstream task into the self-supervised loss by increasing the number of negative samples. We confirm that our proposed analysis holds on real-world benchmark datasets.
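As a rough illustration of the coupon-collector intuition (our gloss, not the paper's actual bound): if negatives are drawn uniformly over C latent classes, the expected number of draws needed to see every class at least once is C·H_C ≈ C ln C, so using many more negatives than classes makes it likely that every supervised class is represented among the negatives. A minimal sketch:

```python
import math

def expected_draws_to_cover(num_classes: int) -> float:
    """Coupon collector: expected number of i.i.d. uniform draws
    needed to observe every class at least once (C * H_C)."""
    return num_classes * sum(1.0 / k for k in range(1, num_classes + 1))

def prob_all_classes_covered(num_classes: int, num_negatives: int) -> float:
    """P(every class appears among K uniform draws), by inclusion-exclusion."""
    c, k = num_classes, num_negatives
    return sum(
        (-1) ** j * math.comb(c, j) * ((c - j) / c) ** k
        for j in range(c + 1)
    )

if __name__ == "__main__":
    C = 10  # a CIFAR-10-like downstream task, for illustration
    print(expected_draws_to_cover(C))       # ~29.3 draws on average
    print(prob_all_classes_covered(C, 64))  # close to 1 with many negatives
```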
Due to its usefulness in data enrichment for data analysis tasks, joinable table discovery has become an important operation in data lake management. Existing approaches target equi-joins, the most common way of combining tables to create a unified view, or semantic joins, which tolerate misspellings and different formats to deliver more join results. They are either exact solutions whose running time is linear in the sizes of the query column and the target table repository, or approximate solutions lacking precision. In this paper, we propose Deepjoin, a deep learning model for accurate and efficient joinable table discovery. Our solution is an embedding-based retrieval, which employs a pre-trained language model (PLM) and is designed as one framework serving both equi- and semantic joins. We propose a set of contextualization options to transform column contents into a text sequence. The PLM reads the sequence and is fine-tuned to embed columns to vectors such that columns are expected to be joinable if they are close to each other in the vector space. Since the output of the PLM is fixed in length, the subsequent search procedure becomes independent of the column size. With a state-of-the-art approximate nearest neighbor search algorithm, the search time is logarithmic in the repository size. To train the model, we devise techniques for preparing training data as well as for data augmentation. The experiments on real datasets demonstrate that by training on a small subset of a corpus, Deepjoin generalizes to large datasets and its precision consistently outperforms that of other approximate solutions. Deepjoin is even more accurate than an exact solution to semantic joins when evaluated with labels from experts. Moreover, when equipped with a GPU, Deepjoin is up to two orders of magnitude faster than existing solutions.
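A minimal sketch of the embedding-based retrieval pipeline described here (not the authors' Deepjoin implementation; the encoder checkpoint and the contextualization shown, table title plus column name plus cell values, are our illustrative assumptions), using an off-the-shelf sentence encoder and an HNSW index whose query time grows logarithmically in the repository size:

```python
# Requires: pip install sentence-transformers hnswlib
import hnswlib
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the fine-tuned PLM

def contextualize(table_title: str, col_name: str, cells: list[str]) -> str:
    """One possible contextualization: serialize a column to a text sequence."""
    return f"{table_title}. {col_name}: " + ", ".join(cells[:50])

def build_index(column_texts: list[str]) -> hnswlib.Index:
    """Embed all repository columns and index them for cosine ANN search."""
    vecs = model.encode(column_texts, normalize_embeddings=True)
    index = hnswlib.Index(space="cosine", dim=vecs.shape[1])
    index.init_index(max_elements=len(column_texts), ef_construction=200, M=16)
    index.add_items(vecs, list(range(len(column_texts))))
    return index

def search(index: hnswlib.Index, query_col_text: str, k: int = 10):
    """Fixed-length embeddings make search independent of column size."""
    q = model.encode([query_col_text], normalize_embeddings=True)
    labels, distances = index.knn_query(q, k=k)
    return labels[0], distances[0]
```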
We study the problem of sharing as many branching conditions of a given forest classifier or regressor as possible while preserving classification performance. As a constraint to prevent accuracy degradation, we first require that the decision paths of all the given feature vectors must not change. For a branching condition checking whether the value of a certain feature is at most a given threshold, the set of threshold values satisfying this constraint can be represented as an interval. Thus, the problem reduces to finding a minimum set of points intersecting all the constraint-satisfying intervals, for each set of branching conditions on the same feature. We propose an algorithm for the original problem that uses an efficient algorithm for this subproblem. We later relax the constraint to promote further sharing of branching conditions, by allowing the decision paths of a certain ratio of the given feature vectors to change, or by allowing a certain number of non-intersected constraint-satisfying intervals. We also extend our algorithm to both relaxations. The effectiveness of our method is demonstrated through comprehensive experiments using 21 datasets (13 classification and 8 regression datasets from the UCI machine learning repository) and 4 classifiers/regressors (random forest, extremely randomized trees, AdaBoost, and gradient boosting).
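The subproblem named here, finding a minimum set of points intersecting all intervals, admits the classic greedy solution: sort intervals by right endpoint and choose a right endpoint whenever the current interval is not yet stabbed. A minimal sketch of that standard greedy (our illustration of the subproblem, not the paper's full algorithm):

```python
def min_piercing_set(intervals):
    """Greedy minimum set of points stabbing all closed intervals [lo, hi].

    Sort by right endpoint; whenever an interval is not pierced by the
    last chosen point, pick its right endpoint. Optimal for intervals.
    """
    points = []
    last = None
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if last is None or lo > last:
            last = hi
            points.append(hi)
    return points

# Each interval is the set of threshold values keeping every decision path
# unchanged for one branching condition; the chosen points are the shared
# thresholds for that feature.
print(min_piercing_set([(0.2, 0.5), (0.4, 0.9), (0.8, 1.0)]))  # [0.5, 1.0]
```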
This work presents a self-supervised method for learning dense, semantically rich visual concept embeddings for images, inspired by methods for learning word embeddings in NLP. Our method improves on existing work by producing more expressive embeddings and by being applicable to high-resolution images. Viewing the generation of natural images as a stochastic process in which a set of latent visual concepts gives rise to observable pixel appearances, our method is formulated to learn the inverse mapping from pixels to concepts. It greatly improves the effectiveness of self-supervised learning for dense embedding maps by introducing superpixelization as a natural hierarchical step up from pixels to a small set of visually coherent regions. Additional contributions are regional contextual masking with non-uniform shapes matching visually coherent patches, and complexity-based view sampling inspired by masked language modeling. The enhanced expressiveness of our dense embeddings is demonstrated by significant improvements over the state of the art on representation quality benchmarks for COCO (+12.94 mIoU, +87.6%) and Cityscapes (+16.52 mIoU, +134.2%). The results further indicate favorable scaling and domain-generalization properties that prior work has not demonstrated.
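A minimal sketch of the superpixelization step as we read it: pool dense per-pixel features over visually coherent regions. The SLIC segmenter and average pooling below are our assumptions for illustration; the paper's own superpixel method and feature extractor may differ.

```python
# Requires: pip install scikit-image numpy
import numpy as np
from skimage.segmentation import slic

def superpixel_pool(image: np.ndarray, pixel_embeddings: np.ndarray,
                    n_segments: int = 200):
    """Average-pool a (H, W, D) embedding map over SLIC superpixels.

    Returns (segments, region_embeddings) where region_embeddings[i] is the
    mean embedding of region i -- a small set of visually coherent regions
    standing in for individual pixels.
    """
    segments = slic(image, n_segments=n_segments, start_label=0)
    n_regions = segments.max() + 1
    region_embeddings = np.zeros((n_regions, pixel_embeddings.shape[-1]))
    for r in range(n_regions):
        mask = segments == r
        region_embeddings[r] = pixel_embeddings[mask].mean(axis=0)
    return segments, region_embeddings
```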
We propose a novel approach to identify the difficulty of visual questions for visual question answering (VQA), without direct supervision or annotations of difficulty. Prior work has considered the diversity of ground-truth answers from human annotators. In contrast, we analyze the difficulty of visual questions based on the behavior of multiple different VQA models. We obtain entropy values of the predicted answer distributions from three different models: a baseline method that takes both the image and the question as input, and two variants that take only the image or only the question as input. We use simple K-means to cluster the visual questions of the VQA v2 validation set. We then use state-of-the-art methods to determine the accuracy and the entropy of the answer distributions for each cluster. A benefit of the proposed approach is that no annotation of difficulty is required, since the accuracy of each cluster reflects the difficulty of the visual questions belonging to it. Our approach can identify clusters of difficult visual questions that state-of-the-art methods fail to answer correctly. Detailed analysis on the VQA v2 dataset shows that 1) all methods exhibit poor performance on the most difficult cluster (about 10% accuracy), 2) as cluster difficulty increases, the answers predicted by the different methods begin to diverge, and 3) the cluster entropy values are highly correlated with cluster accuracy. We show that our approach can assess the difficulty of visual questions without ground truth (i.e., on the test set of VQA v2) by assigning them to one of the clusters. We hope this stimulates new research directions and the development of new algorithms.
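A minimal sketch of the clustering step under our reading of the abstract (array names and the cluster count are illustrative assumptions): each model yields a softmax answer distribution per visual question, and questions are clustered on the three per-model entropies.

```python
# Requires: pip install numpy scipy scikit-learn
import numpy as np
from scipy.stats import entropy
from sklearn.cluster import KMeans

def difficulty_clusters(p_full: np.ndarray, p_img: np.ndarray,
                        p_q: np.ndarray, n_clusters: int = 5) -> np.ndarray:
    """Cluster questions by entropies of three models' answer distributions.

    p_full, p_img, p_q: (N, num_answers) softmax outputs of the full model,
    the image-only variant, and the question-only variant.
    """
    feats = np.stack([entropy(p, axis=1) for p in (p_full, p_img, p_q)], axis=1)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)

# Per-cluster accuracy of a reference model then serves as the difficulty
# score: low-accuracy clusters contain the hardest visual questions, and no
# difficulty annotation is needed.
```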